
    Video-based driver identification using local appearance face recognition

    In this paper, we present a person identification system for vehicular environments. The proposed system uses face images of the driver and applies local appearance-based face recognition over the video sequence. To perform local appearance-based face recognition, the input face image is decomposed into non-overlapping blocks, and the discrete cosine transform is applied to each block to extract local features. The extracted local features are then concatenated to construct the overall feature vector. This process is repeated for each video frame. At the training stage, the distribution of the feature vectors over the video is modelled with a Gaussian distribution function. During testing, the feature vector extracted from each frame is compared to each person's distribution, and individual likelihood scores are generated. Finally, the person with the maximum joint-likelihood score is identified. To assess the performance of the developed system, extensive experiments are conducted on different identification scenarios: closed-set identification, open-set identification, and verification. The experiments use a subset of the CIAIR-HCC database, an in-vehicle data corpus collected at Nagoya University, Japan. We show that, despite the varying environment and illumination conditions that commonly exist in vehicular environments, it is possible to identify individuals robustly from their face images. Index Terms — Local appearance face recognition, vehicle environment, discrete cosine transform, fusion.
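The block-DCT feature extraction and joint-likelihood identification described in the abstract can be sketched as follows. The block size, the number of coefficients kept per block, the zig-zag selection order, and the diagonal covariance are illustrative assumptions; the abstract does not fix these details.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct_features(image, block_size=8, coeffs_per_block=5):
    """Decompose a grayscale image into non-overlapping blocks and keep a
    few low-frequency DCT coefficients per block (zig-zag order assumed),
    concatenated into one feature vector."""
    h, w = image.shape
    features = []
    for y in range(0, h - h % block_size, block_size):
        for x in range(0, w - w % block_size, block_size):
            block = image[y:y + block_size, x:x + block_size].astype(float)
            # 2-D DCT-II as two separable 1-D transforms
            d = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
            zigzag = [d[0, 0], d[0, 1], d[1, 0], d[2, 0], d[1, 1]]
            features.extend(zigzag[:coeffs_per_block])
    return np.array(features)

def joint_log_likelihood(frames, mean, var):
    """Sum per-frame log-likelihoods under a diagonal Gaussian
    (mean, var per feature dimension)."""
    frames = np.asarray(frames, dtype=float)
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                + (frames - mean) ** 2 / var)))

def identify(frames, models):
    """models: list of (mean, var) per enrolled person; return the index
    of the person with the maximum joint-likelihood score."""
    return int(np.argmax([joint_log_likelihood(frames, m, v)
                          for m, v in models]))
```

A usage sketch: extract `block_dct_features` from every frame of a test sequence, then call `identify` with each enrolled driver's Gaussian parameters estimated at training time.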

    Framework zur Entwicklung, Bewertung und Analyse von Computer-Vision-Anwendungen im Kontext umfelderfassender Fahrerassistenzsysteme

    As part of this work, the software system Advanced Development & Analysis Framework (ADAF) was developed and realized in cooperation with NISYS GmbH. It enables the efficient development and evaluation of environment-perceiving driver assistance systems. This includes the capture, recording, and playback of sensor data; its visualization, annotation, and processing; and the evaluation of the processing results. A central feature of ADAF is the traceability and reproducibility of processing and evaluation, which permits a detailed root-cause analysis in case of failure. The performance and flexibility of ADAF are demonstrated on concrete projects that show how ADAF meets a range of different problem settings. Finally, the German Traffic Sign Recognition Benchmark (GTSRB) is presented, a comprehensive dataset for evaluating traffic sign recognition methods.

    Video-based Face Recognition on Real-World Data

    In this paper, we present the classification sub-system of a real-time video-based face identification system which recognizes people entering through the door of a laboratory. Since the subjects are not asked to cooperate with the system but are allowed to behave naturally, this application scenario poses many challenges. Continuous, uncontrolled variations of facial appearance due to illumination, pose, expression, and occlusion need to be handled to allow for successful recognition. Faces are classified by a local appearance-based face recognition algorithm. The confidence scores obtained from each classification are progressively combined to provide the identity estimate of the entire sequence. We introduce three different measures to weight the contribution of each individual frame to the overall classification decision: distance-to-model (DTM), distance-to-second-closest (DT2ND), and their combination. Both a k-nearest-neighbor approach and a set of Gaussian mixtures are evaluated to produce individual frame scores. We have conducted closed-set and open-set identification experiments on a database of 41 subjects. The experimental results show that the proposed system is able to reach high correct recognition rates in a difficult scenario.
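The frame-weighting idea behind DTM and DT2ND can be sketched roughly as follows. The exponential form of the weights, the per-subject similarity `exp(-alpha * d)`, and the parameter `alpha` are illustrative assumptions; the abstract does not give the exact formulas.

```python
import numpy as np

def fuse_frame_scores(frame_distances, alpha=1.0):
    """Progressively combine per-frame results into a sequence-level
    identity estimate. frame_distances has shape (n_frames, n_subjects),
    where a smaller distance means a better match. Each frame is weighted
    by distance-to-model (DTM, closeness to the best model) and
    distance-to-second-closest (DT2ND, the margin over the runner-up)."""
    frame_distances = np.asarray(frame_distances, dtype=float)
    scores = np.zeros(frame_distances.shape[1])
    for d in frame_distances:
        nearest = np.sort(d)
        dtm = nearest[0]                  # distance to the closest model
        dt2nd = nearest[1] - nearest[0]   # margin over the second closest
        # confident frames are close to some model AND well separated
        weight = np.exp(-alpha * dtm) * (1.0 - np.exp(-alpha * dt2nd))
        scores += weight * np.exp(-alpha * d)  # weighted per-subject score
    return int(np.argmax(scores)), scores
```

Frames where the best and second-best models are nearly tied receive weight close to zero, so ambiguous frames (e.g. under occlusion) contribute little to the final decision.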

    The German Traffic Sign Recognition Benchmark: A multi-class classification competition

    The German Traffic Sign Recognition Benchmark is a multi-category classification competition held at IJCNN 2011. Automatic recognition of traffic signs is required in advanced driver assistance systems and constitutes a challenging real-world computer vision and pattern recognition problem. A comprehensive, lifelike dataset of more than 50,000 traffic sign images has been collected. It reflects the strong variations in visual appearance of signs due to distance, illumination, weather conditions, partial occlusions, and rotations. The images are complemented by several precomputed feature sets to allow for applying machine learning algorithms without background knowledge in image processing. The dataset comprises 43 classes with unbalanced class frequencies. Participants have to classify two test sets of more than 12,500 images each. Here, the results on the first of these sets, which was used in the first evaluation stage of the two-fold challenge, are reported. The methods employed by the participants who achieved the best results are briefly described and compared to human traffic sign recognition performance and baseline results.

    Detection of traffic signs in real-world images: the German traffic sign detection benchmark

    Abstract — Real-time detection of traffic signs, the task of pinpointing a traffic sign's location in natural images, is a challenging computer vision task of high industrial relevance. Various algorithms have been proposed, and advanced driver assistance systems supporting detection and recognition of traffic signs have reached the market. Despite the many competing approaches, there is no clear consensus on what the state of the art in this field is. This can be attributed to the lack of comprehensive, unbiased comparisons of those methods. We aim at closing this gap with the "German Traffic Sign Detection Benchmark", presented as a competition at IJCNN 2013 (International Joint Conference on Neural Networks). We introduce a real-world benchmark dataset for traffic sign detection together with carefully chosen evaluation metrics, baseline results, and a web interface for comparing approaches. In our evaluation, we separate sign detection from classification, but still measure the performance on relevant categories of signs to allow for benchmarking specialized solutions. The considered baseline algorithms represent some of the most popular detection approaches, such as the Viola-Jones detector based on Haar features and a linear classifier relying on HOG descriptors. Further, a recently proposed problem-specific algorithm exploiting shape and color in a model-based Hough-like voting scheme is evaluated. Finally, we present the best-performing algorithms of the IJCNN competition.
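The HOG-plus-linear-classifier baseline mentioned above can be sketched as a sliding-window detector. The single-histogram gradient feature below is a deliberate simplification (real HOG uses spatial cells and block normalization), and the window size, stride, and pre-trained weights `w`, `b` are assumptions for illustration, not the benchmark's actual baseline implementation.

```python
import numpy as np

def grad_orient_hist(window, bins=9):
    """Simplified HOG-style feature: one magnitude-weighted histogram of
    unsigned gradient orientations over the whole window."""
    gy, gx = np.gradient(window.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def detect(image, w, b, win=32, stride=8, thresh=0.0):
    """Slide a fixed-size window over a grayscale image and score each
    position with a linear classifier (w, b assumed trained elsewhere,
    e.g. by a linear SVM). Returns (score, x, y, w, h) boxes above the
    threshold, best-scoring first."""
    hits = []
    H, W = image.shape
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            f = grad_orient_hist(image[y:y + win, x:x + win])
            s = float(np.dot(w, f) + b)
            if s > thresh:
                hits.append((s, x, y, win, win))
    return sorted(hits, reverse=True)
```

A production detector would additionally scan an image pyramid for scale invariance and apply non-maximum suppression to the returned boxes.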